
    Understanding Epileptiform After-Discharges as Rhythmic Oscillatory Transients

    Electro-cortical activity in patients with epilepsy may show abnormal rhythmic transients in response to stimulation. Even when using the same stimulation parameters in the same patient, wide variability in the duration of the transient response has been reported. These transients have long been considered important for mapping excitability levels in the epileptic brain, but their dynamic mechanism is still not well understood. To understand the occurrence of abnormal transients dynamically, we use a thalamo-cortical neural population model of epileptic spike-wave activity and study the interaction between slow and fast subsystems. In a reduced version of the thalamo-cortical model, slow wave oscillations arise from a fold of cycles (FoC) bifurcation. This marks the onset of a region of bistability between a high-amplitude oscillatory rhythm and the background state. In the vicinity of the bistability in parameter space, the model has excitable dynamics, showing prolonged rhythmic transients in response to suprathreshold pulse stimulation. We analyse the state space geometry of the bistable and excitable states, and find that the rhythmic transient arises when the impending FoC bifurcation deforms the state space and creates an area of locally reduced attraction to the fixed point. Trajectories can dwell in this area before escaping to the stable steady state, thus creating rhythmic transients. In the full thalamo-cortical model, we find a similar FoC bifurcation structure. Based on the analysis, we propose an explanation of why stimulation-induced epileptiform activity may vary between trials, and predict how the variability could be related to ongoing oscillatory background activity.
    Comment: http://journal.frontiersin.org/article/10.3389/fncom.2017.00025/ful
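
    The mechanism can be illustrated in a much simpler system. Below is a minimal sketch using the Bautin (generalised Hopf) normal form rather than the paper's thalamo-cortical model; the parameter values beta1, beta2 and the pulse size are illustrative assumptions. The normal form has the same fold-of-cycles structure: just past the bifurcation the stable limit cycle is gone, but its "ghost" locally weakens attraction to the fixed point, so a suprathreshold pulse produces a prolonged rhythmic transient before the trajectory settles back.

```python
# Sketch only: Bautin normal form, not the paper's thalamo-cortical model.
import numpy as np
from scipy.integrate import solve_ivp

def bautin(t, state, beta1, beta2):
    """Generalised Hopf (Bautin) normal form in Cartesian coordinates."""
    x, y = state
    r2 = x**2 + y**2
    radial = beta1 + beta2 * r2 - r2**2   # fold of cycles at beta1 = -beta2**2/4
    return [radial * x - y, radial * y + x]

beta2 = 1.0
beta1 = -0.26   # just past the FoC at -0.25: the stable cycle has vanished,
                # leaving a "ghost" of reduced attraction near r^2 ~ 0.5

# Suprathreshold pulse: displace the system from the stable fixed point (0, 0).
sol = solve_ivp(bautin, (0, 200), [0.8, 0.0], args=(beta1, beta2), max_step=0.05)

amplitude = np.hypot(sol.y[0], sol.y[1])
last = sol.t[amplitude > 0.4][-1]  # how long the rhythmic transient persists
print(f"oscillation stays above half the pulse amplitude until t ~ {last:.1f}")
```

    Moving beta1 closer to -0.25 lengthens the transient, which mirrors the abstract's point that trial-to-trial variability can follow from how close the background state sits to the FoC bifurcation.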

    End-to-End Audiovisual Fusion with LSTMs

    Several end-to-end deep learning approaches have recently been presented that simultaneously extract visual features from the input images and perform visual speech classification. However, research on jointly extracting audio and visual features and performing classification is very limited. In this work, we present an end-to-end audiovisual model based on Bidirectional Long Short-Term Memory (BLSTM) networks. To the best of our knowledge, this is the first audiovisual fusion model which simultaneously learns to extract features directly from the pixels and spectrograms and to perform classification of speech and nonlinguistic vocalisations. The model consists of multiple identical streams, one for each modality, which extract features directly from mouth regions and spectrograms. The temporal dynamics in each stream/modality are modelled by a BLSTM, and the fusion of multiple streams/modalities takes place via another BLSTM. An absolute improvement of 1.9% in the mean F1 of 4 nonlinguistic vocalisations over audio-only classification is reported on the AVIC database. At the same time, the proposed end-to-end audiovisual fusion system improves the state-of-the-art performance on the AVIC database, leading to a 9.7% absolute increase in the mean F1 measure. We also perform audiovisual speech recognition experiments on the OuluVS2 database using different views of the mouth, frontal to profile. The proposed audiovisual system significantly outperforms the audio-only model for all views when the acoustic noise is high.
    Comment: Accepted to AVSP 2017. arXiv admin note: substantial text overlap with arXiv:1709.00443 and text overlap with arXiv:1701.0584
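
    A minimal PyTorch sketch of the two-stream layout the abstract describes: a per-modality feature extractor feeding a stream-level BLSTM, with fusion via a further BLSTM. The layer sizes, the tiny CNN encoders and the 4-class head are illustrative assumptions, not the paper's configuration.

```python
# Sketch of the described architecture; all dimensions are assumptions.
import torch
import torch.nn as nn

class StreamEncoder(nn.Module):
    """Per-frame feature extractor followed by a stream-level BLSTM."""
    def __init__(self, in_channels, feat_dim=64, hidden=128):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(in_channels, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim),
        )
        self.blstm = nn.LSTM(feat_dim, hidden, batch_first=True,
                             bidirectional=True)

    def forward(self, frames):                # frames: (B, T, C, H, W)
        b, t = frames.shape[:2]
        feats = self.cnn(frames.flatten(0, 1)).view(b, t, -1)
        out, _ = self.blstm(feats)            # (B, T, 2 * hidden)
        return out

class AVFusion(nn.Module):
    def __init__(self, n_classes=4, hidden=128):
        super().__init__()
        self.video = StreamEncoder(in_channels=1)   # mouth-region frames
        self.audio = StreamEncoder(in_channels=1)   # spectrogram slices
        self.fusion = nn.LSTM(4 * hidden, hidden, batch_first=True,
                              bidirectional=True)   # fusion BLSTM
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, mouth, spec):
        fused, _ = self.fusion(torch.cat([self.video(mouth),
                                          self.audio(spec)], dim=-1))
        return self.head(fused[:, -1])        # classify from last time step

model = AVFusion()
logits = model(torch.randn(2, 10, 1, 64, 64),   # 10 mouth-region frames
               torch.randn(2, 10, 1, 64, 64))   # 10 spectrogram slices
print(logits.shape)                             # torch.Size([2, 4])
```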

    Learning Local Metrics and Influential Regions for Classification

    The performance of distance-based classifiers heavily depends on the underlying distance metric, so it is valuable to learn a suitable metric from the data. To address the problem of multimodality, it is desirable to learn local metrics. In this short paper, we define a new intuitive distance with local metrics and influential regions, and subsequently propose a novel local metric learning method for distance-based classification. Our key intuition is to partition the metric space into influential regions and a background region, and then restrict the effect of each local metric to its related influential regions. We learn the local metrics and influential regions to reduce the empirical hinge loss, and regularise the parameters on the basis of a resultant learning bound. Encouraging experimental results are obtained on a variety of popular public data sets.
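
    A small NumPy sketch of the kind of distance this intuition suggests (not the paper's learning algorithm): a local Mahalanobis metric applies inside each influential region and a background metric elsewhere. The anchors, radii and metric matrices below are hand-picked assumptions; in the paper they are learned by minimising the hinge loss.

```python
# Illustrative region-dependent distance; all parameters are assumptions.
import numpy as np

anchors = np.array([[0.0, 0.0], [5.0, 5.0]])          # influential-region centres
radii = np.array([2.0, 2.0])
local_M = [np.diag([4.0, 1.0]), np.diag([1.0, 4.0])]  # local metric matrices
background_M = np.eye(2)                              # metric everywhere else

def metric_at(x):
    """Return the metric matrix in force at point x."""
    for c, r, M in zip(anchors, radii, local_M):
        if np.linalg.norm(x - c) <= r:
            return M
    return background_M

def dist(x, y):
    # Symmetrise by averaging the metrics in force at the two endpoints.
    M = 0.5 * (metric_at(x) + metric_at(y))
    d = x - y
    return float(np.sqrt(d @ M @ d))

x, y = np.array([0.5, 0.5]), np.array([1.0, -0.5])
print(dist(x, y), np.linalg.norm(x - y))  # local metric stretches axis 0
```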

    Fractal and Multifractal Properties of Electrographic Recordings of Human Brain Activity: Toward Its Use as a Signal Feature for Machine Learning in Clinical Applications

    The brain is a system operating on multiple time scales, and characterisation of dynamics across time scales remains a challenge. One framework for studying such dynamics is fractal geometry. However, there is currently no established method for studying brain dynamics with fractal geometry, owing to the many conceptual and technical challenges of the methods. We aim to highlight some of the practical challenges of applying fractal geometry to brain dynamics and propose solutions to enable its wider use in neuroscience. Using intracranially recorded EEG and simulated data, we compared monofractal and multifractal methods with regard to their sensitivity to signal variance. We found that both correlate closely with signal variance, and thus offer no new information about the signal. However, after applying an epoch-wise standardisation procedure to the signal, we found that multifractal measures could offer non-redundant information compared with signal variance, power and other established EEG signal measures. We also compared different multifractal estimation methods and found that the Chhabra-Jensen algorithm performed best. Finally, we investigated the impact of sampling frequency and epoch length on multifractal properties. Using epileptic seizures as an example event in the EEG, we show that there may be an optimal time scale for detecting temporal changes in multifractal properties around seizures. The practical issues we highlight, and the solutions we suggest, should help in developing a robust method for applying fractal geometry to EEG signals. Our analyses and observations also aid the theoretical understanding of the multifractal properties of the brain and might provide grounds for new discoveries in the study of brain signals. These could be crucial for understanding neurological function and for the development of new treatments.
    Comment: Final version published at Frontiers in Physiology. https://doi.org/10.3389/fphys.2018.0176
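
    A compact sketch of the pipeline the abstract argues for: z-score each epoch first, then estimate the singularity spectrum with the Chhabra-Jensen direct method and summarise it by its width. The epoch length, q-range, scales and the toy input signal are illustrative assumptions, not the paper's settings.

```python
# Sketch only; epoch length, q-range and scales are assumed values.
import numpy as np

def standardise_epochs(x, epoch_len=512):
    """Epoch-wise standardisation, so multifractal measures are no longer
    redundant with epoch-to-epoch variance differences."""
    n = len(x) // epoch_len
    e = x[: n * epoch_len].reshape(n, epoch_len)
    return ((e - e.mean(1, keepdims=True)) / e.std(1, keepdims=True)).ravel()

def chhabra_jensen_width(x, qs=np.linspace(-5, 5, 21),
                         scales=(8, 16, 32, 64, 128)):
    """Singularity-spectrum width, max alpha(q) - min alpha(q), estimated
    with the Chhabra-Jensen direct method on a measure of |increments|."""
    measure = np.abs(np.diff(x)) + 1e-12          # strictly positive measure
    log_s = np.log(scales)
    A = np.zeros((len(qs), len(scales)))
    for j, s in enumerate(scales):
        nb = len(measure) // s
        P = measure[: nb * s].reshape(nb, s).sum(1)
        P /= P.sum()                              # box probabilities
        for i, q in enumerate(qs):
            mu = P ** q
            mu /= mu.sum()                        # q-weighted measure
            A[i, j] = (mu * np.log(P)).sum()      # numerator of alpha(q)
    alphas = np.array([np.polyfit(log_s, row, 1)[0] for row in A])
    return alphas.max() - alphas.min()

rng = np.random.default_rng(0)
x = np.cumsum(rng.standard_normal(2 ** 14))       # toy (monofractal) signal
print(chhabra_jensen_width(standardise_epochs(x)))
```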

    Dynamic Face Video Segmentation via Reinforcement Learning

    For real-time semantic video segmentation, most recent works use a dynamic framework with a key scheduler to make online key/non-key decisions. Some works use a fixed key scheduling policy, while others propose adaptive key scheduling methods based on heuristic strategies; both may lead to suboptimal global performance. To overcome this limitation, we model the online key decision process in dynamic video segmentation as a deep reinforcement learning problem and learn an efficient and effective scheduling policy from expert information about decision history and from the process of maximising the global return. Moreover, we study the application of dynamic video segmentation to face videos, a field that has not been investigated before. Evaluating on the 300VW dataset, we show that our reinforcement key scheduler outperforms various baselines in terms of both effective key selections and running speed. Further results on the Cityscapes dataset demonstrate that the proposed method also generalises to other scenarios. To the best of our knowledge, this is the first work to use reinforcement learning for online key-frame decisions in dynamic video segmentation, and also the first work on its application to face videos.
    Comment: CVPR 2020. 300VW with segmentation labels is available at: https://github.com/mapleandfire/300VW-Mas
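
    A toy REINFORCE sketch of framing the online key/non-key decision as an RL problem. The two-dimensional state (a scene-drift proxy and frames since the last key), the reward trade-off and the synthetic environment are all illustrative assumptions, not the paper's design; they only show the structure of learning a key scheduler by maximising return.

```python
# Toy environment and policy; not the paper's method or reward.
import torch
import torch.nn as nn

policy = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.Adam(policy.parameters(), lr=1e-2)
KEY_COST = 0.3                       # penalty for running the full network

for episode in range(200):
    drift, since_key = 0.0, 0
    log_probs, rewards = [], []
    for t in range(50):
        drift += torch.rand(1).item() * 0.1          # scene-change proxy
        state = torch.tensor([drift, since_key / 10.0])
        dist = torch.distributions.Categorical(logits=policy(state))
        action = dist.sample()                       # 1 = key frame
        log_probs.append(dist.log_prob(action))
        if action.item() == 1:                       # run full segmentation
            rewards.append(1.0 - KEY_COST)
            drift, since_key = 0.0, 0
        else:                                        # propagate features:
            rewards.append(max(0.0, 1.0 - drift))    # quality decays w/ drift
            since_key += 1
    ret = torch.tensor(rewards).flip(0).cumsum(0).flip(0)  # returns-to-go
    loss = -(torch.stack(log_probs) * (ret - ret.mean())).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# After training, the policy should prefer a key frame once drift is large.
probs = torch.softmax(policy(torch.tensor([1.5, 0.5])), dim=-1)
print("P(key | high drift):", probs[1].item())
```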